Terminology for agent versus "standard" values in extensive games #642
Conversation
…ax_regret` -> `agent_max_regret` in Python
…` for expositional clarity.
rahulsavani
left a comment
Quite a lot to check here, but I believe everything is fine. Two minor things, the second of which is more general than this PR:

- `enumpure_solve` now has no `use_strategic` flag, but the `NashComputationResult` still shows it as follows: `NashComputationResult(method='enumpure', rational=True, use_strategic=True`; similarly for `enumpure_agent_solve`, where `use_strategic` is then `False` in the `NashComputationResult`. This might be fine, as it indicates the type of the profiles under `equilibria`, but then perhaps that should be indicated another way rather than via this defunct argument?
- `test_tutorials.py` initially failed for me. On further inspection, that was because it used a notebook checkpoint. I am not sure we want that behavior, if it can be customized. If you agree, we could ask Ed about looking into this.
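One way to realise the first point above, sketched here as hypothetical Python (this is not pygambit's actual API, and `profile_type` is an invented field name): record the representation of the equilibrium profiles in the result object itself, rather than keeping the defunct `use_strategic` argument.

```python
# Hypothetical sketch of the suggestion above -- NOT pygambit's actual API.
# Instead of a defunct use_strategic flag, the result records which kind of
# profile the equilibria are expressed in.
from dataclasses import dataclass, field


@dataclass
class NashComputationResult:
    method: str                  # e.g. "enumpure"
    rational: bool               # exact rational arithmetic used?
    profile_type: str            # "strategy" (strategic form) or "agent" (agent form)
    equilibria: list = field(default_factory=list)


# enumpure_solve would report strategy profiles ...
strategic = NashComputationResult(method="enumpure", rational=True,
                                  profile_type="strategy")
# ... while enumpure_agent_solve would report agent (behavior) profiles.
agent = NashComputationResult(method="enumpure", rational=True,
                              profile_type="agent")
```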
# Conflicts:
#   doc/tutorials/03_stripped_down_poker.ipynb
#   src/games/stratmixed.h
#   src/pygambit/nash.pxi
#   src/tools/liap/liap.cc
#   tests/test_behav.py
#   tests/test_nash.py
@rahulsavani We now have the new, Nash-correct
It seems
# Conflicts:
#   tests/test_game.py
This was not one of my more clever moments. The C++ implementation is correct. The branch has been updated with this fix. I've also merged in other work on clarifying mixed strategy profile caching, proactively fixing other potential cache invalidation problems we haven't stumbled upon yet. Assuming everything is working now, are we ready to merge, or do we want to have the example set up as well?
I will do some checks, and I can also do both final tasks, reviewing the tutorials/user guide and adding a tutorial on Myerson fig 4.2. |
@tturocy: Before merging, it would be good if you could review the new consistency checks in
@tturocy: I did a first draft of the notebook. I will check it carefully in the morning, and could incorporate any suggested changes then, in case you get a chance to look before.
@edwardchalstrey1: There is now an advanced draft of a new notebook related to this PR, see: 04_agent_versus_non_agent_regret.ipynb If you get a chance to take a look, feedback would be welcome. We should then also decide how we want to present it/link to it in the user guide.
Is it worth offering a brief description of the game and, if possible, what you might intuitively expect from the equilibrium computation?
It would definitely be nice. I will check Myerson's text in case a useful description is given, but as I understand it, it is a pathological example constructed specifically to show the difference between agent and non-agent versions of these notions.
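The flavour of such a pathology can be seen even in a toy one-player game with two information sets (the payoffs below are invented for illustration and are not taken from Myerson or the notebook): no single-infoset ("agent") deviation gains anything, so the agent max regret is zero, yet deviating jointly at both infosets does gain, so the standard max regret is positive.

```python
# Toy one-player game with two information sets; payoffs are hypothetical,
# chosen only to illustrate the agent vs. standard distinction.
# Infoset 1 chooses U or D; infoset 2 chooses L or R.
payoff = {("U", "L"): 1, ("U", "R"): 0, ("D", "L"): 0, ("D", "R"): 2}


def agent_max_regret(a1, a2):
    # Agent notion: best gain from deviating at ONE infoset, the other held fixed.
    gain1 = max(payoff[(x, a2)] for x in "UD") - payoff[(a1, a2)]
    gain2 = max(payoff[(a1, y)] for y in "LR") - payoff[(a1, a2)]
    return max(gain1, gain2)


def max_regret(a1, a2):
    # Standard notion: best gain from any joint deviation by the player.
    return max(payoff.values()) - payoff[(a1, a2)]


# At (U, L): no single-infoset deviation helps, but switching both to (D, R) does.
print(agent_max_regret("U", "L"))  # 0
print(max_regret("U", "L"))        # 1
```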
At the moment, the tutorial that explains starting points is in the advanced_tutorials folder. If this one assumes the reader has that knowledge already, perhaps we need to reorder, or simply put this tutorial in the advanced folder and link to the other one here?
Oh, this is definitely not for non-expert users. As per my other reply to your comment, the agent versions are not actually particularly useful as I see it, but we have them already so we chose to keep them.
If the intended audience isn't likely to already know this, perhaps explain why computing these values is useful.
Good idea -- it is meant to already be clear that we are interested in these being zero, as that gives a Nash equilibrium, but I agree it will be useful to add that they are both measures of how close to equilibrium we are.
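The point that max regret measures distance from equilibrium can be shown with a small self-contained sketch (plain Python, not the pygambit API): in Matching Pennies, the max regret of a mixed profile is the largest gain any player could achieve by a unilateral best response, and it is zero exactly at the uniform mixed equilibrium.

```python
# Plain-Python illustration (not the pygambit API): max regret of a mixed
# strategy profile in a two-player game, zero exactly at a Nash equilibrium.


def expected(M, p, q):
    # Expected payoff: sum over i, j of p[i] * q[j] * M[i][j].
    return sum(p[i] * q[j] * M[i][j]
               for i in range(len(p)) for j in range(len(q)))


def unit(n, i):
    # Pure strategy i as a probability vector of length n.
    e = [0.0] * n
    e[i] = 1.0
    return e


def max_regret(A, B, p, q):
    # Largest gain any one player could get by a unilateral best response.
    row = max(expected(A, unit(len(p), i), q) for i in range(len(p))) - expected(A, p, q)
    col = max(expected(B, p, unit(len(q), j)) for j in range(len(q))) - expected(B, p, q)
    return max(row, col)


# Matching Pennies: the equilibrium is the uniform mix for both players.
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(max_regret(A, B, [0.5, 0.5], [0.5, 0.5]))  # 0.0 (at equilibrium)
print(max_regret(A, B, [1.0, 0.0], [0.5, 0.5]))  # 1.0 (column player can gain 1)
```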
@rahulsavani nice, great to see `draw_tree` in action already! I have added some comments via ReviewNB -- in short, it seems like this is intended to be read as tutorial 4 in the sequence of beginner tutorials, based on how you describe the concepts/terminology throughout, so I suggested a few points of clarification or places where detail could be added. However, I also note that this could go in the advanced tutorials instead, which would make this less necessary.
Yes, `draw_tree` is great! Thanks for the comments -- I put this new one in advanced_tutorials after all. I also finished the penultimate task, which involved replacing
Excellent comment -- on reflection we should not use it, so I removed it!
@tturocy: I am done with this for now; it is ready to merge from my side, unless you have further suggestions for changes.
# Conflicts:
#   src/tools/liap/liap.cc
…or to analysis. We are going to remove `.sort_infosets()` separately; for the moment this adjusts the tests to add an explicit call to clear the errors for clarity.
# Conflicts:
#   tests/games.py
#   tests/test_nash.py
This contains work for solving #617.
Checklist of to-dos:

- `GetLiapValue`/`liap_value` and `GetMaxRegret`/`max_regret` are prefixed by `Agent`/`agent`.
- `enumpure_agent_solve` and `liap_agent_solve` implemented and documented (and cross-linked with their non-agent versions).
- `gambit-liap` updated to follow the convention of `gambit-enumpure`: default to using strategic form, use agent only if prompted explicitly to do so.
- `liap_value` and `max_regret` (and player regret) for `MixedBehaviorProfile`. These should be identical to their strategic equivalent.